Britain's new government aims to regulate most powerful AI models

Starmer has promised to introduce new laws on AI

LONDON (Reuters) - Britain's new Labour government has said it will explore how to effectively regulate artificial intelligence models, but stopped short of proposing any specific laws.

King Charles set out newly elected Prime Minister Keir Starmer's legislative agenda in a speech on Wednesday to open the new session of parliament. It included more than 35 new bills covering everything from housing to cyber security measures.

The government said it would seek to establish the appropriate legislation to place requirements on those working to develop "the most powerful artificial intelligence models."

Britain's previous prime minister, Rishi Sunak, had sought to position the country as a world leader in AI safety, bringing world leaders and company executives together last November for a summit at Bletchley Park to discuss the issue.

He also oversaw the launch of the world's first AI Safety Institute, which has focused on the capabilities of "frontier" AI models, such as those behind OpenAI's highly successful ChatGPT chatbot.

"AI labs will be collectively breathing a sigh of relief at the government's decision not to rush ahead with frontier model regulation," said Nathan Benaich, founding partner of AI-focused investment group Air Street Capital.

Under Sunak, the government avoided introducing targeted AI regulation, opting instead to split responsibility for scrutinising the technology between various regulators.

Starmer has promised to introduce new laws on AI, but his government is taking a careful approach to rolling them out.

"The UK's cautious, sector-based approach to AI regulation remains a crucial competitive advantage versus the EU, and any moves to change this regime should only be taken with the utmost caution," Benaich said.

But some AI experts say the rapid rollout of AI tools over the past 18 months only makes the need for new legislation more urgent.

Gaia Marcus, director of the Ada Lovelace Institute, said the government should bring forward a bill as soon as possible.

"These systems are already being integrated into our daily lives, our public services and our economy, bringing benefits and opportunity, but also posing a range of risks to people and society," she said.